    How good are projection methods for convex feasibility problems?

    We consider simple projection methods for solving convex feasibility problems. Both successive and sequential methods are considered, and heuristics to improve these are suggested. Unfortunately, and despite the large literature that might suggest otherwise, numerical tests indicate that in general none of the variants considered is especially effective or competitive with more sophisticated alternatives.
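
    As a concrete illustration of the class of methods being tested, the sketch below implements the classical successive (cyclic) projection scheme for a system of linear inequalities. The function names and the stopping rule are our own; this is a minimal sketch of the generic idea, not any of the paper's particular variants.

    ```python
    import numpy as np

    def project_halfspace(x, a, b):
        """Orthogonal projection of x onto the halfspace {y : a @ y <= b}."""
        violation = a @ x - b
        if violation <= 0.0:
            return x                     # already inside: projection is x itself
        return x - (violation / (a @ a)) * a

    def successive_projections(A, b, x0, tol=1e-10, max_sweeps=10000):
        """Cyclically project onto each halfspace A[i] @ x <= b[i] in turn."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_sweeps):
            for a_i, b_i in zip(A, b):
                x = project_halfspace(x, a_i, b_i)
            if np.all(A @ x <= b + tol): # feasible: stop after a full sweep
                return x
        return x                         # best effort after max_sweeps sweeps

    # Tiny example: the triangle x >= 0, y >= 0, x + y <= 1.
    A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
    b = np.array([0.0, 0.0, 1.0])
    print(successive_projections(A, b, np.array([3.0, -2.0])))
    ```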

    Finding a point in the relative interior of a polyhedron

    A new initialization or 'Phase I' strategy for feasible interior point methods for linear programming is proposed that computes a point on the primal-dual central path associated with the linear program. Provided there exist primal-dual strictly feasible points (an all-pervasive assumption in interior point method theory that implies the existence of the central path), our initial method (Algorithm 1) is globally Q-linearly and asymptotically Q-quadratically convergent, with a provable worst-case iteration complexity bound. When this assumption is not met, the numerical behaviour of Algorithm 1 is highly disappointing, even when the problem is primal-dual feasible. This is due to the presence of implicit equalities: inequality constraints that hold as equalities at every feasible point. Controlled perturbations of the inequality constraints of the primal-dual problems are introduced (geometrically equivalent to enlarging the primal-dual feasible region and then systematically contracting it back to its initial shape) so that the perturbed problems satisfy the assumption. Thus Algorithm 1 can successfully be employed to solve each of the perturbed problems.

    We show that, when there exist primal-dual strictly feasible points of the original problems, the resulting method, Algorithm 2, finds such a point in a finite number of changes to the perturbation parameters. When implicit equalities are present, but the original problem and its dual are feasible, Algorithm 2 asymptotically detects all the primal-dual implicit equalities and generates a point in the relative interior of the primal-dual feasible set. Algorithm 2 can also asymptotically detect primal-dual infeasibility. Successful numerical experience with Algorithm 2 on linear programs from NETLIB and CUTEr, both with and without any significant preprocessing of the problems, indicates that Algorithm 2 may be used as an algorithmic preprocessor for removing implicit equalities, with theoretical guarantees of convergence.
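
    The notion of an implicit equality is easy to make concrete. The brute-force sketch below flags the rows of a small polyhedron {x : A x <= b} that hold as equalities at every feasible point, by maximising each row's slack with a linear program. It illustrates the definition only, not the paper's perturbation-based Algorithm 2.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def implicit_equalities(A_ub, b_ub, tol=1e-8):
        """Flag each row a_i @ x <= b_i whose slack b_i - a_i @ x is zero at
        every feasible point, by maximising that slack over the polyhedron."""
        flags = []
        for a_i, b_i in zip(A_ub, b_ub):
            # max slack = b_i - min a_i @ x  subject to  A_ub @ x <= b_ub
            res = linprog(c=a_i, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
            flags.append(res.status == 0 and b_i - res.fun <= tol)
        return flags

    # x <= 1 and -x <= -1 together force x = 1: both rows are implicit equalities.
    A = np.array([[1.0], [-1.0]])
    b = np.array([1.0, -1.0])
    print(implicit_equalities(A, b))   # [True, True]
    ```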

    A second derivative SQP method: theoretical issues

    Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which "guides" the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
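
    For contrast with the paper's approach (which avoids solving nonconvex QPs globally altogether), the standard blunt remedy is to convexify the exact Hessian before forming the QP subproblem. Below is a minimal sketch of that fallback; the shift tolerance delta is an assumed parameter of ours, not the paper's.

    ```python
    import numpy as np

    def convexify(H, delta=1e-6):
        """Shift a (possibly indefinite) exact Hessian just enough that the
        resulting QP subproblem is strictly convex: B = H + sigma * I, with
        sigma chosen so the smallest eigenvalue of B is at least delta."""
        sigma = max(0.0, delta - np.linalg.eigvalsh(H).min())
        return H + sigma * np.eye(H.shape[0])

    # An indefinite exact Hessian becomes a strictly convex model Hessian.
    H = np.array([[1.0, 2.0], [2.0, -3.0]])
    print(np.linalg.eigvalsh(convexify(H)))   # all eigenvalues >= 1e-6
    ```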

    A second derivative SQP method: local convergence

    In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm.

    Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

    Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
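
    A minimal sketch of the general increasing-penalty idea discussed here: approximately minimise the exact ℓ1-merit function over a sequence of growing penalty parameters until the constraints are satisfied. The inner solver, tolerances, and the factor-of-ten update are our own assumptions, not the paper's rules.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def l1_merit(f, c, nu):
        """Exact l1-merit function: phi(x; nu) = f(x) + nu * ||c(x)||_1."""
        return lambda x: f(x) + nu * np.sum(np.abs(c(x)))

    def increasing_penalty(f, c, x0, nu=0.25, tol=1e-6, nu_max=1e8):
        """Approximately minimise phi over increasing nu until feasibility."""
        x = np.asarray(x0, dtype=float)
        while nu < nu_max:
            # Nelder-Mead tolerates the kinks of the nonsmooth l1 term.
            x = minimize(l1_merit(f, c, nu), x, method="Nelder-Mead",
                         options={"xatol": 1e-10, "fatol": 1e-10}).x
            if np.sum(np.abs(c(x))) <= tol:
                return x, nu          # nu is now large enough to be exact
            nu *= 10.0
        return x, nu

    # min x0^2 + x1^2  s.t.  x0 + x1 = 1; exact once nu exceeds |lambda*| = 1.
    f = lambda x: x @ x
    c = lambda x: np.array([x[0] + x[1] - 1.0])
    print(increasing_penalty(f, c, [0.0, 0.0]))   # approx. (0.5, 0.5)
    ```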

    Nonlinear programming without a penalty function or a filter

    A new method is introduced for solving equality constrained nonlinear optimization problems. This method uses neither a penalty function, nor a barrier, nor a filter, and yet can be proved to be globally convergent to first-order stationary points. It uses different trust regions to cope with the nonlinearities of the objective function and the constraints, and allows inexact SQP steps that do not lie exactly in the nullspace of the local Jacobian. Preliminary numerical experiments on CUTEr problems indicate that the method performs well.
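
    One way to make the two-trust-region idea concrete is a composite step: a normal step toward the constraints, limited by one radius, followed by a tangential step in the nullspace of the Jacobian, limited by the other. The sketch below is illustrative only; it assumes a full-rank Jacobian and a positive-definite reduced Hessian, and it omits the acceptance tests of the actual method.

    ```python
    import numpy as np
    from scipy.linalg import null_space

    def composite_step(g, B, J, c, delta_c, delta_f):
        """Normal step toward the constraints within radius delta_c, then a
        tangential step in null(J) reducing the model of f within delta_f."""
        # Normal step: least-squares step on the linearised constraints.
        n = -np.linalg.lstsq(J, c, rcond=None)[0]
        if np.linalg.norm(n) > delta_c:
            n *= delta_c / np.linalg.norm(n)
        # Tangential step: Newton step of the reduced quadratic model,
        # crudely clipped to its own trust region.
        Z = null_space(J)
        t = Z @ np.linalg.solve(Z.T @ B @ Z, -Z.T @ (g + B @ n))
        if np.linalg.norm(t) > delta_f:
            t *= delta_f / np.linalg.norm(t)
        return n + t

    # Linearisation of: min x0^2 + x1^2  s.t.  x0 + x1 = 1, at x = (0, 0).
    g = np.array([0.0, 0.0]); B = 2.0 * np.eye(2)
    J = np.array([[1.0, 1.0]]); c = np.array([-1.0])
    print(composite_step(g, B, J, c, delta_c=0.8, delta_f=1.0))  # [0.5 0.5]
    ```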

    A second-derivative trust-region SQP method with a "trust-region-free" predictor step

    In (NAR 08/18 and 08/21, Oxford University Computing Laboratory, 2008) we introduced a second-derivative SQP method (S2QP) for solving nonlinear nonconvex optimization problems. We proved that the method is globally convergent and locally superlinearly convergent under standard assumptions. A critical component of the algorithm is the so-called predictor step, which is computed from a strictly convex quadratic program with a trust-region constraint. This step is essential for proving global convergence, but its propensity to identify the optimal active set is paramount for recovering fast local convergence. Thus the global and local efficiency of the method is intimately coupled with the quality of the predictor step.

    In this paper we study the effects of removing the trust-region constraint from the computation of the predictor step; this is reasonable since the resulting problem is still strictly convex and thus well-defined. Although this is an interesting theoretical question, our motivation is based on practicality. Our preliminary numerical experience with S2QP indicates that the trust-region constraint occasionally degrades the quality of the predictor step and diminishes its ability to correctly identify the optimal active set. Moreover, removal of the trust-region constraint allows for re-use of the predictor step over a sequence of failed iterations, thus reducing computation. We show that the modified algorithm remains globally convergent and preserves local superlinear convergence provided a nonmonotone strategy is incorporated.
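
    A minimal sketch of what a trust-region-free predictor step looks like under the usual assumptions (B positive definite, J of full row rank): the unique solution of a strictly convex equality-constrained QP via its KKT system. Because no radius enters, the step depends only on the current data and can be cached across failed iterations, which is the practical point made above. This is an illustration of the idea, not the S2QP predictor step itself.

    ```python
    import numpy as np

    def predictor_step(B, g, J, c):
        """Solve  min_d  g @ d + 0.5 * d @ B @ d   s.t.  J @ d + c = 0
        via its KKT system; no trust-region radius is involved."""
        n, m = len(g), len(c)
        K = np.block([[B, J.T], [J, np.zeros((m, m))]])
        return np.linalg.solve(K, -np.concatenate([g, c]))[:n]

    B = 2.0 * np.eye(2); g = np.zeros(2)
    J = np.array([[1.0, 1.0]]); c = np.array([-1.0])
    print(predictor_step(B, g, J, c))   # [0.5 0.5]
    ```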

    On implicit-factorization constraint preconditioners

    Recently, Dollar and Wathen [14] proposed a class of incomplete factorizations for saddle-point problems, based upon earlier work by Schilders [40]. In this paper, we generalize this class of preconditioners and examine the spectral implications of our approach. Numerical tests indicate the efficacy of our preconditioners.
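
    The spectral property that motivates constraint preconditioners is easy to check numerically: if P replaces the (1,1) block H of the saddle-point matrix K by an approximation G while keeping the constraint blocks, then P^{-1}K has the eigenvalue 1 with multiplicity 2m. The toy experiment below uses an arbitrary cheap diagonal G of our choosing, not the implicit factorizations studied in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 8, 3
    A = rng.standard_normal((m, n))               # full row rank constraint block
    H = rng.standard_normal((n, n)); H = H + H.T  # symmetric (1,1) block

    K = np.block([[H, A.T], [A, np.zeros((m, m))]])   # saddle-point matrix
    G = np.diag(np.clip(np.diag(H), 1.0, None))       # cheap SPD stand-in for H
    P = np.block([[G, A.T], [A, np.zeros((m, m))]])   # constraint preconditioner

    # P^{-1} K has eigenvalue 1 with multiplicity 2m; the remaining n - m
    # eigenvalues come from the pencil (Z' H Z, Z' G Z) on the nullspace of A.
    eigs = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
    print(np.round(eigs, 6))
    ```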

    On solving trust-region and other regularised subproblems in optimization

    The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both globally and asymptotically at least superlinearly convergent in all cases, including the notorious hard case. Numerical experiments validate the effectiveness of our approach. The resulting software is available as packages TRS and RQS as part of the GALAHAD optimization library, and is especially designed for large-scale problems.
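
    The core of the factorization-based approach is Newton's method on the secular equation 1/||s(lam)|| = 1/Delta, where (H + lam*I) s(lam) = -g is solved by a fresh Cholesky factorization at each trial lam. The sketch below covers the easy case only; the hard case, and the paper's high-order polynomial approximation and inverse-iteration enhancements, are deliberately omitted.

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def trust_region_subproblem(H, g, Delta, tol=1e-10, max_iter=50):
        """Newton iteration on the secular equation 1/||s(lam)|| = 1/Delta,
        factorizing H + lam*I at each trial lam.  Easy case only."""
        I = np.eye(len(g))
        lam = max(0.0, -np.linalg.eigvalsh(H).min() + 1e-8)
        s = np.zeros_like(g)
        for _ in range(max_iter):
            cf = cho_factor(H + lam * I)
            s = cho_solve(cf, -g)
            ns = np.linalg.norm(s)
            if abs(ns - Delta) <= tol * Delta or (lam == 0.0 and ns < Delta):
                return s, lam        # boundary (or interior) solution found
            w = cho_solve(cf, s)     # w = (H + lam*I)^{-1} s
            # More-Sorensen Newton update for the secular equation.
            lam = max(0.0, lam + (ns / Delta - 1.0) * (ns * ns / (s @ w)))
        return s, lam

    # Indefinite H, easy case: g has a component along the leftmost eigenvector.
    H = np.diag([1.0, -2.0]); g = np.array([1.0, 1.0])
    print(trust_region_subproblem(H, g, Delta=1.0))
    ```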